 quantum simulation


Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits

Neural Information Processing Systems

Variational quantum circuits have been widely employed in quantum simulation and quantum machine learning in recent years. However, quantum circuits with random structures have poor trainability, with gradients that vanish exponentially in the circuit depth and the qubit number. This result has led to the widespread view that deep quantum circuits would not be feasible for practical tasks. In this work, we propose an initialization strategy with theoretical guarantees for the vanishing gradient problem in general deep quantum circuits. Specifically, we prove that under properly Gaussian-initialized parameters, the norm of the gradient decays at most polynomially as the qubit number and the circuit depth increase. Our theoretical results hold for both the local and the global observable cases, where the latter was believed to exhibit vanishing gradients even for very shallow circuits. Experimental results verify our theoretical findings in quantum simulation and quantum chemistry.
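The effect the abstract describes can be sketched numerically. The snippet below is an illustration, not the paper's code: it assumes a hardware-efficient RY + CZ-chain ansatz, a local Z observable on qubit 0, and an illustrative Gaussian standard deviation of 1/sqrt(L) for L layers (the paper's exact variance schedule may differ), then compares gradient magnitudes under uniform versus Gaussian initialization via the parameter-shift rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    # Contract a single-qubit gate into the 2^n state vector at qubit q.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cz(state, a, b, n):
    # CZ flips the sign of amplitudes where both qubits are |1>.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def circuit_expectation(thetas, n, layers):
    # <psi| Z_0 |psi> for an RY + CZ-chain ansatz starting from |0...0>.
    state = np.zeros(2 ** n); state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(thetas[k]), q, n); k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    psi = state.reshape([2] * n)
    return (np.abs(psi[0]) ** 2).sum() - (np.abs(psi[1]) ** 2).sum()

def grad_first_param(thetas, n, layers):
    # Parameter-shift rule for the first RY angle.
    shift = np.zeros_like(thetas); shift[0] = np.pi / 2
    return 0.5 * (circuit_expectation(thetas + shift, n, layers)
                  - circuit_expectation(thetas - shift, n, layers))

n, layers, trials = 6, 8, 50
p = n * layers
uniform = [abs(grad_first_param(rng.uniform(0, 2 * np.pi, p), n, layers))
           for _ in range(trials)]
gaussian = [abs(grad_first_param(rng.normal(0, 1 / np.sqrt(layers), p), n, layers))
            for _ in range(trials)]
print(f"mean |grad|, uniform init:  {np.mean(uniform):.4f}")
print(f"mean |grad|, Gaussian init: {np.mean(gaussian):.4f}")
```

On a toy scale like this the contrast is modest; the abstract's claim is about the asymptotic scaling as depth and qubit number grow.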


TorchQuantumDistributed

Knitter, Oliver, Mei, Jonathan, Yamada, Masako, Roetteler, Martin

arXiv.org Artificial Intelligence

TorchQuantumDistributed (tqd) is a PyTorch-based [Paszke et al., 2019] library for accelerator-agnostic differentiable quantum state vector simulation at scale. This enables studying the behavior of learnable parameterized near-term and fault-tolerant quantum circuits with high qubit counts.
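The library's own API is not shown in this excerpt. As a minimal sketch of the underlying idea, assuming only plain PyTorch (not tqd), the example below builds a small state-vector circuit out of differentiable tensor contractions so that autograd yields gradients of an observable with respect to every gate angle at once.

```python
import torch

def ry(theta):
    # 2x2 RY rotation built as a differentiable torch tensor.
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def apply_gate(state, gate, qubit, n):
    # Contract a single-qubit gate into the 2^n state vector at `qubit`.
    psi = state.reshape([2] * n)
    psi = torch.movedim(torch.tensordot(gate, psi, dims=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

n = 4
thetas = torch.randn(n, dtype=torch.float64, requires_grad=True)
state = torch.zeros(2 ** n, dtype=torch.float64)
state[0] = 1.0
for q in range(n):
    state = apply_gate(state, ry(thetas[q]), q, n)

# <Z_0>: +1 where qubit 0 (most significant bit) is |0>, -1 where it is |1>.
signs = torch.tensor([1.0 if i < 2 ** (n - 1) else -1.0 for i in range(2 ** n)],
                     dtype=torch.float64)
expval = (signs * state.abs() ** 2).sum()
expval.backward()  # autograd: d<Z_0>/d(theta_q) for every angle in one pass
print(expval.item(), thetas.grad)
```

Because every gate is an ordinary tensor op, the same code runs unchanged on CPU or GPU by moving the tensors, which is the accelerator-agnostic property the abstract highlights.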


Scalable Quantum State Preparation via Large-Language-Model-Driven Discovery

Cao, Qing-Hong, Hou, Zong-Yue, Li, Ying-Ying, Liu, Xiaohui, Song, Zhuo-Yang, Zhang, Liang-Qi, Zhang, Shutao, Zhao, Ke

arXiv.org Artificial Intelligence

Efficient quantum state preparation remains a central challenge in first-principles quantum simulations of dynamics in quantum field theories, where the Hilbert space is intrinsically infinite-dimensional. Here, we introduce a large language model (LLM)-assisted framework for quantum-circuit design that systematically scales state-preparation circuits to large lattice volumes. Applied to a 1+1d XY spin chain, the LLM autonomously discovers a compact 4-parameter circuit that captures boundary-induced symmetry breaking with sub-percent energy deviation, enabling successful validation on the Zuchongzhi quantum processor. Guided by this insight, we extend the framework to 2+1d quantum field theories, where scalable variational ansätze have remained elusive. For a scalar field theory, the search yields a symmetry-preserving, 3-parameter shallow-depth ansatz whose optimized parameters converge to size-independent constants for lattices n ≥ 4, providing, to our knowledge, the first scalable ansatz for this class of 2+1d models. Our results establish a practical route toward AI-assisted, human-guided discovery in quantum simulation.


New quantum computer is on the path to unravelling superconductivity

New Scientist

Researchers at the quantum computing firm Quantinuum used a new Helios-1 quantum computer to simulate a mathematical model that has long been used to study superconductivity. These simulations are not out of reach for conventional computers, but this advance sets the stage for quantum computers to become useful tools for materials science. Superconductors conduct electricity with perfect efficiency, but they currently only work at temperatures too low to be practical. For decades, physicists have been trying to understand how to tweak their structure to make them work at room temperature, and many believe answers will come from a mathematical framework called the Fermi-Hubbard model. This potential makes it one of the most important models in all of condensed matter physics, says Quantinuum's Henrik Dreyer. Conventional computers can run exceptional simulations of the Fermi-Hubbard model but struggle with very large samples or cases where the materials it describes change over time.



RhoDARTS: Differentiable Quantum Architecture Search with Density Matrix Simulations

Kumar, Swagat, Zaech, Jan-Nico, Wilmott, Colin Michael, Van Gool, Luc

arXiv.org Artificial Intelligence

Variational Quantum Algorithms (VQAs) are a promising approach to leverage Noisy Intermediate-Scale Quantum (NISQ) computers. However, choosing optimal quantum circuits that efficiently solve a given VQA problem is a non-trivial task. Quantum Architecture Search (QAS) algorithms enable automatic generation of quantum circuits tailored to the provided problem. Existing QAS approaches typically adapt classical neural architecture search techniques, training machine learning models to sample relevant circuits, but often overlook the inherent quantum nature of the circuits they produce. By reformulating QAS from a quantum perspective, we propose a sampling-free differentiable QAS algorithm that models the search process as the evolution of a quantum mixed state, which emerges from the search space of quantum circuits. The mixed-state formulation also enables our method to incorporate generic noise models, for example the depolarizing channel, which cannot be modeled by state vector simulation. We validate our method by finding circuits for state initialization and Hamiltonian optimization tasks, namely the variational quantum eigensolver and the unweighted max-cut problems. We show that our approach is comparable to, and in some cases outperforms, existing QAS techniques while requiring significantly fewer quantum simulations during training, and that it exhibits improved robustness to noise.
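A minimal sketch of the density-matrix ingredient the abstract relies on (not the RhoDARTS algorithm itself): applying a depolarizing channel to a density matrix produces a mixed state with purity below 1, which no single state vector can represent, and this is why noise models of this kind require density-matrix rather than state-vector simulation.

```python
import numpy as np

def depolarize(rho, p):
    # Depolarizing channel: mix rho toward the maximally mixed state I/d.
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# One qubit: start in |0><0|, rotate by RY(pi/2), then depolarize.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
U = ry(np.pi / 2)
rho = U @ rho @ U.conj().T       # unitary evolution: rho -> U rho U^dagger
rho = depolarize(rho, p=0.1)     # noise: rho becomes a genuine mixed state

Z = np.diag([1.0, -1.0])
purity = np.trace(rho @ rho).real
print("purity:", purity)               # < 1: no state vector reproduces this rho
print("<Z>:", np.trace(Z @ rho).real)  # observables via Tr(O rho)
```

The cost is quadratic in the Hilbert-space dimension (a d x d matrix instead of a length-d vector), which is the price of modeling generic noise channels.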


Digital-Analog Quantum Machine Learning

Lamata, Lucas

arXiv.org Artificial Intelligence

Machine Learning algorithms are extensively used in an increasing number of systems, applications, technologies, and products, both in industry and in society as a whole. They enable computing devices to learn from previous experience and therefore improve their performance in a certain context or environment. In this way, many useful possibilities have been made accessible. However, dealing with an increasing amount of data poses difficulties for classical devices. Quantum systems may offer a way forward, possibly enabling machine learning calculations to scale up in certain contexts. On the other hand, quantum systems themselves are also hard to scale up, due to decoherence and the fragility of quantum superpositions. In the short and mid term, evidence suggests that a quantum paradigm that combines evolution under large analog blocks with discrete quantum gates may be fruitful for gaining new knowledge of classical and quantum systems without the need for a fault-tolerant quantum computer. In this Perspective, we review some recent works that employ this digital-analog quantum paradigm to carry out efficient machine learning calculations with current quantum devices.
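The digital-analog paradigm described above can be sketched in a toy simulation (this is an illustration, not any of the reviewed works): "analog blocks" are continuous-time evolutions under a native entangling Hamiltonian, here assumed to be a nearest-neighbour Ising ZZ coupling, interleaved with "digital" single-qubit rotations.

```python
import numpy as np

# Pauli matrices and a helper for tensor products.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ising_hamiltonian(n, J=1.0):
    # Nearest-neighbour ZZ coupling: the "analog" resource Hamiltonian.
    H = np.zeros((2 ** n, 2 ** n))
    for q in range(n - 1):
        ops = [I2] * n
        ops[q], ops[q + 1] = Z, Z
        H += J * kron_all(ops)
    return H

def analog_block(H, t):
    # exp(-i H t) via eigendecomposition (H is Hermitian).
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def rx_on(q, theta, n):
    # "Digital" single-qubit RX rotation on qubit q.
    g = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
    return kron_all([g if i == q else I2 for i in range(n)])

n = 3
H = ising_hamiltonian(n)
state = np.zeros(2 ** n, dtype=complex); state[0] = 1.0

# Digital-analog sequence: single-qubit rotations interleaved with analog evolution.
for t, theta in [(0.3, 0.7), (0.5, 0.2)]:
    for q in range(n):
        state = rx_on(q, theta, n) @ state
    state = analog_block(H, t) @ state

print("norm:", np.linalg.norm(state))  # unitary evolution preserves the norm
```

The appeal of the paradigm is that the analog blocks use the device's always-on native interaction rather than compiling entanglement into many two-qubit gates.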




Meta-Designing Quantum Experiments with Language Models

Arlt, Sören, Duan, Haonan, Li, Felix, Xie, Sang Michael, Wu, Yuhuai, Krenn, Mario

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has the potential to significantly advance scientific discovery by finding solutions beyond human capabilities. However, these super-human solutions are often unintuitive and require considerable effort to uncover underlying principles, if possible at all. Here, we show how a code-generating language model trained on synthetic data can not only find solutions to specific problems but can create meta-solutions, which solve an entire class of problems in one shot and simultaneously offer insight into the underlying design principles. Specifically, for the design of new quantum physics experiments, our sequence-to-sequence transformer architecture generates interpretable Python code that describes experimental blueprints for a whole class of quantum systems. We discover general and previously unknown design rules for infinitely large classes of quantum states. The ability to automatically generate generalized patterns in readable computer code is a crucial step toward machines that help discover new scientific understanding -- one of the central aims of physics.


Expanding the Horizon: Enabling Hybrid Quantum Transfer Learning for Long-Tailed Chest X-Ray Classification

Chan, Skylar, Kulkarni, Pranav, Yi, Paul H., Parekh, Vishwa S.

arXiv.org Artificial Intelligence

Quantum machine learning (QML) has the potential to improve the multi-label classification of rare, albeit critical, diseases in large-scale chest x-ray (CXR) datasets due to theoretical quantum advantages over classical machine learning (CML) in sample efficiency and generalizability. While prior literature has explored QML with CXRs, it has focused on binary classification tasks with small datasets due to limited access to quantum hardware and computationally expensive simulations. To address these limitations, we implemented a Jax-based framework that enables the simulation of medium-sized qubit architectures with significant improvements in wall-clock time over current software offerings. We evaluated our Jax-based framework in terms of efficiency and performance for hybrid quantum transfer learning for long-tailed classification across 8, 14, and 19 disease labels using large-scale CXR datasets. The Jax-based framework resulted in up to a 58% and 95% speed-up compared to PyTorch and TensorFlow implementations, respectively. However, compared to CML, QML demonstrated slower convergence and an average AUROC of 0.70, 0.73, and 0.74 for the classification of 8, 14, and 19 CXR disease labels. In comparison, the CML models had an average AUROC of 0.77, 0.78, and 0.80, respectively. In conclusion, our work presents an accessible implementation of hybrid quantum transfer learning for long-tailed CXR classification with a computationally efficient Jax-based framework.
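The authors' framework is not reproduced here; as a hedged sketch of why JAX helps in this setting, the example below jit-compiles a differentiable state-vector expectation value, the kind of quantum-layer primitive a hybrid transfer-learning pipeline would call repeatedly. The product-RY circuit and Z_0 observable are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def ry(theta):
    c, s = jnp.cos(theta / 2), jnp.sin(theta / 2)
    return jnp.array([[c, -s], [s, c]])

def expectation(thetas, n):
    # <Z_0> of a product RY circuit; jit-compiled and autograd-differentiable.
    state = jnp.zeros(2 ** n).at[0].set(1.0).reshape([2] * n)
    for q in range(n):
        state = jnp.moveaxis(
            jnp.tensordot(ry(thetas[q]), state, axes=([1], [q])), 0, q)
    probs = jnp.abs(state.reshape(-1)) ** 2
    # +1 where qubit 0 (most significant bit) is |0>, -1 where it is |1>.
    signs = jnp.where(jnp.arange(2 ** n) < 2 ** (n - 1), 1.0, -1.0)
    return jnp.sum(signs * probs)

n = 6
loss = jax.jit(jax.value_and_grad(expectation), static_argnums=1)
thetas = jnp.linspace(0.1, 0.6, n)
val, grads = loss(thetas, n)   # XLA-compiled value and gradient in one call
print(val, grads)
```

After the first traced call, subsequent calls run the fused XLA program directly, which is one plausible source of the wall-clock gains the abstract reports over eager-mode implementations.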